-
There is a large body of work studying what forms of computational hardness are needed to realize classical cryptography. In particular, one-way functions and pseudorandom generators can be built from each other, and thus require equivalent computational assumptions to be realized. Furthermore, the existence of either of these primitives implies that P ≠ NP, which gives a lower bound on the necessary hardness.

One can also define versions of each of these primitives with quantum output: respectively, one-way state generators and pseudorandom state generators. Unlike in the classical setting, it is not known whether either primitive can be built from the other. Although it has been shown that pseudorandom state generators for certain parameter regimes can be used to build one-way state generators, the implication was not previously known in full generality. Furthermore, to the best of our knowledge, the existence of one-way state generators has no known implications in complexity theory.

We show that pseudorandom states compressing n bits to log n qubits can be used to build one-way state generators, and that pseudorandom states compressing n bits to n − log n qubits are one-way state generators. This is a nearly optimal result, since pseudorandom states with fewer than c log n qubits of output can be shown to exist unconditionally. We also show that any one-way state generator can be broken by a quantum algorithm with classical access to a PP oracle.

An interesting implication of our results is that a t(n)-copy one-way state generator exists unconditionally, for every t(n) = o(n/log n). This contrasts nicely with the previously known fact that O(n)-copy one-way state generators require computational hardness. We also outline a new route towards a black-box separation between one-way state generators and quantum bit commitments.
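For context, here is a sketch of the standard pseudorandomness condition on a state generator k ↦ |ψ_k⟩; this is the textbook form rather than anything specific to the entry above, and parameter conventions vary across papers:

```latex
% Sketch of the standard PRSG security condition (conventions vary).
% A QPT map k -> |psi_k> on n-bit keys is pseudorandom if, for every
% QPT distinguisher A and every polynomial number of copies t = t(n),
\[
\Big| \Pr_{k \leftarrow \{0,1\}^n}\big[\, A\big(|\psi_k\rangle^{\otimes t}\big) = 1 \,\big]
    - \Pr_{|\varphi\rangle \leftarrow \mu_{\mathrm{Haar}}}\big[\, A\big(|\varphi\rangle^{\otimes t}\big) = 1 \,\big] \Big|
  \le \mathrm{negl}(n).
\]
% A one-way state generator instead requires that no QPT adversary,
% given t(n) copies of |psi_k>, can find a key k' whose state passes
% a verification test against |psi_k> with noticeable probability.
```

The t-copy variants discussed above fix the number of copies the adversary receives, which is why sufficiently small t can be achieved unconditionally.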
-
Meka, Raghu (Ed.)
We consider a generalization of the Learning With Errors problem, referred to as the white-box learning problem: you are given the code of a sampler that with high probability produces samples of the form (y, f(y) + ε), where ε is small and f is computable in polynomial size, and the computational task consists of outputting a polynomial-size circuit C that, with probability, say, 1/3 over a new sample y' drawn from the same distribution, approximates f(y') (i.e., |C(y') − f(y')| is small). This problem can be thought of as a generalization of the Learning With Errors problem (LWE) from linear functions f to polynomial-size computable functions.

We demonstrate that worst-case hardness of the white-box learning problem, conditioned on the instances satisfying a notion of computational shallowness (a concept from the study of Kolmogorov complexity), not only suffices to get public-key encryption but is also necessary; as such, this yields the first problem whose worst-case hardness characterizes the existence of public-key encryption. Additionally, our results highlight to what extent LWE "overshoots" the task of public-key encryption.

We complement these results by noting that worst-case hardness of the same problem, but restricting the learner to only black-box access to the sampler, characterizes one-way functions.
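As an illustration of the setup (not code from the paper; the linear instantiation, the names, and all parameters below are assumptions), the following sketch contrasts black-box sample access with white-box access to the sampler's code, using an LWE-style noisy linear f:

```python
import inspect
import random

# Illustrative parameters (assumptions, not from the paper):
# dimension, modulus, and noise bound of an LWE-style sampler.
N, Q, NOISE = 8, 97, 1

def make_sampler(secret):
    """Build a sampler emitting (y, f(y) + eps) for f(y) = <y, secret> mod Q."""
    def sample():
        y = [random.randrange(Q) for _ in range(N)]
        eps = random.randint(-NOISE, NOISE)
        return y, (sum(yi * si for yi, si in zip(y, secret)) + eps) % Q
    return sample

secret = [random.randrange(Q) for _ in range(N)]
sampler = make_sampler(secret)

# Black-box access: the learner only sees input/output samples.
black_box_samples = [sampler() for _ in range(3 * N)]

# White-box access: the learner additionally gets the sampler's code
# (modeled crudely here as its source text) and may exploit its structure.
sampler_code = inspect.getsource(make_sampler)

# The learner's task: output a circuit C such that, on a fresh sample
# (y', z'), |C(y') - z'| is small with probability at least 1/3.
y_new, z_new = sampler()
```

The two access modes mirror the paper's dichotomy: worst-case hardness with white-box access characterizes public-key encryption, while the black-box restriction characterizes one-way functions.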
-
Let Kt(x) denote the Levin-Kolmogorov complexity of the string x, and let MKtP denote the language of pairs (x, k) having the property that Kt(x) ≤ k. We demonstrate that:
• MKtP ∉ Heur_neg BPP (i.e., MKtP is two-sided-error mildly average-case hard) iff infinitely-often OWFs exist.
• MKtP ∉ Avg_neg BPP (i.e., MKtP is errorless mildly average-case hard) iff EXP ≠ BPP.
Taken together, these results show that the only "gap" toward getting (infinitely-often) OWFs from the assumption that EXP ≠ BPP is the seemingly "minor" technical gap between two-sided-error and errorless average-case hardness of the MKtP problem.
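For reference, Levin's Kt measure (a standard definition, stated here for a fixed universal machine U; the additive constant depends on the choice of U) is:

```latex
% Levin-Kolmogorov complexity: program length plus the logarithm of
% the running time, minimized over all programs producing x on U.
\[
Kt(x) \;=\; \min_{\Pi \in \{0,1\}^*,\; t \in \mathbb{N}}
  \big\{\, |\Pi| + \lceil \log t \rceil \;:\; U(\Pi) \text{ outputs } x \text{ within } t \text{ steps} \,\big\}
\]
```

Unlike plain Kolmogorov complexity, Kt is computable (in exponential time), which is what makes the average-case formulations above meaningful.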
-
Ta-Shma, Amnon (Ed.)
A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. The celebrated "hardness vs. randomness" paradigm pioneered by Blum-Micali (SIAM JoC '84), Yao (FOCS '84), and Nisan-Wigderson (JCSS '94) presents hardness assumptions under which, e.g., prBPP = prP (so-called "high-end" derandomization) or prBPP ⊆ prSUBEXP (so-called "low-end" derandomization), and more generally under which prBPP ⊆ prDTIME(𝒞), where 𝒞 is a "nice" class (closed under composition with a polynomial); but these hardness assumptions are not known to also be necessary for such derandomization.

In this work, following the recent work by Chen and Tell (FOCS '21) that considers "almost-all-input" hardness of a function f (i.e., hardness of computing f on all but finitely many inputs), we consider "almost-all-input" leakage-resilient hardness of a function f; that is, hardness of computing f(x) even given, say, √|x| bits of leakage of f(x). We show that leakage-resilient hardness characterizes derandomization of prBPP (i.e., gives a condition that is both necessary and sufficient for derandomization), both in the high-end and in the low-end setting. In more detail, we show that there exists a constant c such that for every function T, the following are equivalent:
- prBPP ⊆ prDTIME(poly(T(poly(n))));
- there exists a poly(T(poly(n)))-time computable function f : {0,1}ⁿ → {0,1}ⁿ that is almost-all-input leakage-resilient hard with respect to n^c-time probabilistic algorithms.
As far as we know, this is the first assumption that characterizes derandomization in both the low-end and the high-end regime. Additionally, our characterization naturally extends to derandomization of prMA and to average-case derandomization, by appropriately weakening the requirements on the function f. In particular, for the case of average-case (a.k.a. "effective") derandomization, we no longer require the function to be almost-all-input hard, but simply to satisfy the more standard notion of average-case leakage-resilient hardness (w.r.t. every samplable distribution), whereas for derandomization of prMA we instead consider leakage-resilience for relations.
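One natural way to formalize the hardness notion in this entry (a sketch only; the paper's exact quantifiers and success thresholds may differ) is:

```latex
% Sketch: f is almost-all-input leakage-resilient hard w.r.t. n^c-time
% probabilistic algorithms if every pair (leak, A) of such algorithms,
% with the leakage limited to sqrt(|x|) bits, fails on all but
% finitely many inputs x.
\[
\forall\, (\mathsf{leak}, A):\quad
  \Pr\big[\, A\big(x, \mathsf{leak}(x, f(x))\big) = f(x) \,\big] \le \tfrac{1}{3}
  \;\; \text{for all but finitely many } x,
  \qquad |\mathsf{leak}(x, f(x))| \le \sqrt{|x|}.
\]
```

Setting the leakage length to zero recovers the plain almost-all-input hardness of Chen and Tell, so leakage-resilience strictly strengthens that requirement.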